1.
JBI Evid Synth; 19(11): 3183-3189, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34230445

ABSTRACT

OBJECTIVE: The objective of this review is to explore and understand women's experiences of living with obesity during the perinatal period to support evidence-informed approaches to care.
INTRODUCTION: The rising incidence of maternal obesity is a serious global health problem. Qualitative studies exploring the viewpoints of pregnant women living with obesity have shown that some women report negative experiences associated with pregnancy, with some instances of current care management practices being perceived as confronting, judgmental, and generally unhelpful. Synthesizing qualitative findings about the experiences of pregnant and postpartum women who live with obesity can provide important insights into the general needs of this population and current gaps in health care practice.
INCLUSION CRITERIA: All settings in which women living with obesity during pregnancy receive health care for pregnancy, birthing, and postpartum care will be considered. Studies published from 1995 onward will be included. The review will consider all studies that present qualitative data including, but not limited to, designs such as phenomenology, grounded theory, ethnography, action research, and feminist research.
METHODS: The following databases will be searched for this review: CINAHL (EBSCO), Embase (Elsevier), PsycINFO (EBSCO), MEDLINE (Ovid), and Sociological Abstracts (ProQuest). ProQuest Dissertations and Theses will be searched for unpublished studies. Each study will be assessed by two independent reviewers. Any disagreements will be resolved through discussion. Data extraction will be conducted by two independent reviewers. The JBI resources for meta-aggregation will be used to create categories and synthesized findings.
SYSTEMATIC REVIEW REGISTRATION NUMBER: PROSPERO CRD42020214762.


Subjects
Anthropology, Cultural; Postpartum Period; Delivery of Health Care; Female; Humans; Obesity/therapy; Pregnancy; Qualitative Research; Systematic Reviews as Topic
2.
Multisens Res; 31(1-2): 111-144, 2018 Jan 01.
Article in English | MEDLINE | ID: mdl-31264597

ABSTRACT

Since its discovery 40 years ago, the McGurk illusion has usually been cited as a paradigmatic case of multisensory binding in humans, and has been extensively used in speech perception studies as a proxy measure for audiovisual integration mechanisms. Despite the well-established practice of using the McGurk illusion as a tool for studying the mechanisms underlying audiovisual speech integration, the magnitude of the illusion varies enormously across studies. Furthermore, the processing of McGurk stimuli differs from congruent audiovisual processing at both phenomenological and neural levels. This questions the suitability of this illusion as a tool to quantify the necessary and sufficient conditions under which audiovisual integration occurs in natural conditions. In this paper, we review some of the practical and theoretical issues related to the use of the McGurk illusion as an experimental paradigm. We believe that, without a richer understanding of the mechanisms involved in the processing of the McGurk effect, experimenters should be especially cautious when generalizing data generated by McGurk stimuli to matching audiovisual speech events.

3.
Atten Percept Psychophys; 80(1): 27-41, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29080047

ABSTRACT

When engaging in conversation, we efficiently go back and forth with our partner, organizing our contributions in reciprocal turn-taking behavior. Using multiple auditory and visual cues, we make online decisions about when it is the appropriate time to take our turn. In two experiments, we demonstrated, for the first time, that auditory and visual information serve complementary roles when making such turn-taking decisions. We presented clips of single utterances spoken by individuals engaged in conversations in audiovisual, auditory-only, or visual-only modalities. These utterances occurred either right before a turn exchange (i.e., 'Turn-Ends') or right before the next sentence spoken by the same talker (i.e., 'Turn-Continuations'). In Experiment 1, participants discriminated between Turn-Ends and Turn-Continuations in order to synchronize a button-press response to the moment the talker would stop speaking. We showed that participants were best at discriminating between Turn-Ends and Turn-Continuations in the audiovisual condition. However, in terms of response synchronization, participants were equally precise at timing their responses to a Turn-End in the audiovisual and auditory-only conditions, showing no advantage of visual information. In Experiment 2, we used a gating paradigm, where increasing segments of Turn-Ends and Turn-Continuations were presented, and participants predicted if a turn exchange would occur at the end of the sentence. We found an audiovisual advantage in detecting an upcoming turn early in the perception of a turn exchange. Together, these results suggest that visual information functions as an early signal indicating an upcoming turn exchange, while auditory information is used to precisely time a response to the turn end.


Subjects
Decision Making; Friends/psychology; Social Skills; Speech Perception/physiology; Visual Perception/physiology; Adult; Cues; Female; Humans; Male; Young Adult
4.
J Acoust Soc Am; 142(2): 838, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28863596

ABSTRACT

Previous research has shown that speakers can adapt their speech in a flexible manner as a function of a variety of contextual and task factors. While it is known that speech tasks may play a role in speech motor behavior, it remains to be explored if the manner in which the speaking action is initiated can modify low-level, automatic control of vocal motor action. In this study, the nature (linguistic vs non-linguistic) and modality (auditory vs visual) of the go signal (i.e., the prompts) was manipulated in an otherwise identical vocal production task. Participants were instructed to produce the word "head" when prompted, and the auditory feedback they were receiving was altered by systematically changing the first formants of the vowel /ε/ in real time using a custom signal processing system. Linguistic prompts induced greater corrective behaviors to the acoustic perturbations than non-linguistic prompts. This suggests that the accepted variance for the intended speech sound decreases when external linguistic templates are provided to the speaker. Overall, this result shows that the automatic correction of vocal errors is influenced by flexible, context-dependent mechanisms.


Subjects
Feedback, Sensory; Linguistics; Speech Acoustics; Speech Perception; Voice Quality; Acoustic Stimulation; Acoustics; Adolescent; Adult; Auditory Threshold; Female; Humans; Male; Photic Stimulation; Signal Processing, Computer-Assisted; Speech Production Measurement; Visual Perception; Young Adult
5.
J Speech Lang Hear Res; 59(4): 601-15, 2016 Aug 01.
Article in English | MEDLINE | ID: mdl-27537379

ABSTRACT

PURPOSE: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks.
METHOD: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task and in Experiment 3 (N = 20) an audiovisual task while having their gaze behavior monitored using eye-tracking equipment.
RESULTS: In the visual-only condition, increasing image resolution led to monotonic increases in performance, and proficient speechreaders were more affected by the removal of high spatial information than were poor speechreaders. The McGurk effect also increased with increasing visual resolution, although it was less affected by the removal of high-frequency information. Observers tended to fixate on the mouth more in visual-only perception, but gaze toward the mouth did not correlate with accuracy of silent speechreading or the magnitude of the McGurk effect.
CONCLUSIONS: The results suggest that individual differences in silent speechreading and the McGurk effect are not related. This conclusion is supported by differential influences of high-resolution visual information on the 2 tasks and differences in the pattern of gaze.


Subjects
Eye Movements; Lipreading; Speech Perception; Visual Perception; Analysis of Variance; Eye Movement Measurements; Eye Movements/physiology; Female; Humans; Male; Young Adult
6.
Atten Percept Psychophys; 78(5): 1472-87, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27150616

ABSTRACT

The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.


Subjects
Acoustic Stimulation/methods; Photic Stimulation/methods; Speech Perception; Visual Perception; Adolescent; Adult; Comprehension; Cues; Female; Fixation, Ocular; Humans; Individuality; Male; Noise; Spatial Processing; Young Adult
7.
Neuropsychologia; 75: 402-10, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26100561

ABSTRACT

Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues: two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech.


Subjects
Gestures; Speech Perception; Visual Perception; Acoustic Stimulation; Adult; Cues; Female; Humans; Male; Middle Aged; Noise; Photic Stimulation; Young Adult
8.
Front Psychol; 5: 727, 2014.
Article in English | MEDLINE | ID: mdl-25076922

ABSTRACT

Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

9.
Brain Lang; 126(3): 253-62, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23872285

ABSTRACT

Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet several behavioural studies now show that audiovisual (AV) processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent, and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to language background in this area. Instead, regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance.


Subjects
Auditory Perception/physiology; Language; Multilingualism; Speech Perception/physiology; Visual Perception/physiology; Adult; Female; Humans; Male; Middle Aged
10.
Psychol Sci; 24(4): 423-31, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23462756

ABSTRACT

Mounting physiological and behavioral evidence has shown that the detectability of a visual stimulus can be enhanced by a simultaneously presented sound. The mechanisms underlying these cross-sensory effects, however, remain largely unknown. Using continuous flash suppression (CFS), we rendered a complex, dynamic visual stimulus (i.e., a talking face) consciously invisible to participants. We presented the visual stimulus together with a suprathreshold auditory stimulus (i.e., a voice speaking a sentence) that either matched or mismatched the lip movements of the talking face. We compared how long it took for the talking face to overcome interocular suppression and become visible to participants in the matched and mismatched conditions. Our results showed that the detection of the face was facilitated by the presentation of a matching auditory sentence, in comparison with the presentation of a mismatching sentence. This finding indicates that the registration of audiovisual correspondences occurs at an early stage of processing, even when the visual information is blocked from conscious awareness.


Subjects
Awareness/physiology; Speech Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Consciousness/physiology; Female; Humans; Inhibition, Psychological; Male; Photic Stimulation; Signal Detection, Psychological; Young Adult
11.
PLoS One; 6(10): e25198, 2011.
Article in English | MEDLINE | ID: mdl-21998642

ABSTRACT

Speech perception often benefits from vision of the speaker's lip movements when they are available. One potential mechanism underlying this reported gain in perception arising from audio-visual integration is on-line prediction. In this study we address whether the preceding speech context in a single modality can improve audiovisual processing and whether this improvement is based on on-line information transfer across sensory modalities. In the experiments presented here, during each trial, a speech fragment (context) presented in a single sensory modality (voice or lips) was immediately continued by an audiovisual target fragment. Participants made speeded judgments about whether voice and lips were in agreement in the target fragment. The leading single-modality context and the subsequent audiovisual target fragment could be continuous in one modality only, in both modalities (context in one modality continues into both modalities in the target fragment), or in neither (i.e., discontinuous). The results showed quicker audiovisual matching responses when context was continuous with the target within either the visual or auditory channel (Experiment 1). Critically, prior visual context also provided an advantage when it was cross-modally continuous (with the auditory channel in the target), but auditory-to-visual cross-modal continuity resulted in no advantage (Experiment 2). This suggests that visual speech information can provide an on-line benefit for processing the upcoming auditory input through the use of predictive mechanisms. We hypothesize that this benefit is expressed at an early level of speech analysis.


Subjects
Models, Biological; Speech Perception/physiology; Face; Female; Humans; Male; Photic Stimulation; Sound; Vision, Ocular/physiology; Young Adult
12.
Exp Brain Res; 213(2-3): 175-83, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21431431

ABSTRACT

A critical question in multisensory processing is how the constant information flow that arrives at our different senses is organized in coherent representations. Some authors claim that pre-attentive detection of inter-sensory correlations supports crossmodal binding, whereas other findings indicate that attention plays a crucial role. We used visual and auditory search tasks for speaking faces to address the role of selective spatial attention in audiovisual binding. Search efficiency amongst faces for the match with a voice declined with the number of faces being monitored concurrently, consistent with an attentive search mechanism. In contrast, search amongst auditory speech streams for the match with a face was independent of the number of streams being monitored concurrently, as long as localization was not required. We suggest that the fundamental differences in the way in which auditory and visual information is encoded play a limiting role in crossmodal binding. Based on these unisensory limitations, we provide a unified explanation for several previous apparently contradictory findings.


Subjects
Attention/physiology; Auditory Perception/physiology; Signal Detection, Psychological/physiology; Visual Perception/physiology; Acoustic Stimulation/methods; Female; Humans; Male; Photic Stimulation/methods; Reaction Time/physiology; Sensitivity and Specificity; Sound Localization/physiology; Time Factors; Video Recording; Young Adult
13.
Brain Res; 1323: 84-93, 2010 Apr 06.
Article in English | MEDLINE | ID: mdl-20117103

ABSTRACT

To what extent does our prior experience with the correspondence between audiovisual stimuli influence how we subsequently bind them? We addressed this question by testing English and Spanish speakers (having little prior experience of Spanish and English, respectively) on a crossmodal simultaneity judgment (SJ) task with English or Spanish spoken sentences. The results revealed that the visual speech stream had to lead the auditory speech stream by a significantly larger interval in the participants' native language than in the non-native language for simultaneity to be perceived. Critically, the difference in temporal processing between perceiving native vs. non-native language tends to disappear as the amount of experience with the non-native language increases. We propose that this modulation of multisensory temporal processing as a function of prior experience is a consequence of the constraining role that visual information plays in the temporal alignment of audiovisual speech signals.


Subjects
Speech Perception/physiology; Speech; Visual Perception/physiology; Acoustic Stimulation; Female; Humans; Male; Photic Stimulation
14.
J Exp Psychol Hum Percept Perform; 35(2): 580-7, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19331510

ABSTRACT

Cross-modal illusions such as the McGurk-MacDonald effect have been used to illustrate the automatic, encapsulated nature of multisensory integration. This characterization is based on the widespread assumption that the illusory percept arising from intersensory conflict reflects only the end-product of the multisensory integration process, with the mismatch between the original unisensory events remaining largely hidden from awareness. Here the authors show that when presented with desynchronized audiovisual speech syllables, observers are often able to detect the temporal mismatch while experiencing the McGurk-MacDonald illusion. Thus, contrary to previous assumptions, it seems possible to gain access to information about the individual sensory components of a multisensory (integrated) percept. On the basis of this and similar findings, the authors argue that multisensory integration is a multifaceted process during which different attributes of the (multisensory) object might be bound by different mechanisms and possibly at different times. This proposal contrasts with classic conceptions of multisensory integration as a homogeneous process whereby all the attributes of a multisensory event are treated in a unified manner.


Subjects
Attention; Illusions; Speech Perception; Time Perception; Acoustic Stimulation; Humans; Phonetics; Photic Stimulation; Reference Values; Time Factors; Videotape Recording
15.
Exp Brain Res; 184(4): 533-46, 2008 Feb.
Article in English | MEDLINE | ID: mdl-17885751

ABSTRACT

Participants presented with auditory, visual, or bimodal audiovisual stimuli in a speeded discrimination task fail to respond to the auditory component of bimodal targets significantly more often than to the visual component, a phenomenon known as the Colavita visual dominance effect. Given that spatial and temporal factors have recently been shown to modulate the Colavita effect, the aim of the present study was to investigate whether semantic congruency also modulates the effect. In the three experiments reported here, participants were presented with a version of the Colavita task in which the stimulus congruency between the auditory and visual components of the bimodal targets was manipulated. That is, the auditory and visual stimuli could refer to the same or different object (in Experiments 1 and 2) or audiovisual speech event (Experiment 3). Surprisingly, semantic/stimulus congruency had no effect on the magnitude of the Colavita effect in any of the experiments, although it exerted a significant effect on certain other aspects of participants' performance. This finding contrasts with the results of other recent studies showing that semantic/stimulus congruency can affect certain multisensory interactions.


Subjects
Perceptual Masking/physiology; Semantics; Speech Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Adolescent; Adult; Discrimination, Psychological/physiology; Female; Humans; Male; Photic Stimulation
16.
Exp Brain Res; 183(3): 399-404, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17899043

ABSTRACT

One of the classic examples of multisensory integration in humans occurs when speech sounds are combined with the sight of corresponding articulatory gestures. Despite the longstanding assumption that this kind of audiovisual binding operates in an attention-free mode, recent findings (Alsius et al. in Curr Biol, 15(9):839-843, 2005) suggest that audiovisual speech integration decreases when visual or auditory attentional resources are depleted. The present study addressed the generalization of this attention constraint by testing whether a similar decrease in multisensory integration is observed when attention demands are imposed on a sensory domain that is not involved in speech perception, such as touch. We measured the McGurk illusion in a dual task paradigm involving a difficult tactile task. The results showed that the percentage of visually influenced responses to audiovisual stimuli was reduced when attention was diverted to a tactile task. This finding is attributed to a modulatory effect on audiovisual integration of speech mediated by supramodal attention limitations. We suggest that the interactions between the attentional system and crossmodal binding mechanisms may be much more extensive and dynamic than previously suggested.


Subjects
Attention/physiology; Speech Acoustics; Speech Perception/physiology; Touch/physiology; Visual Perception/physiology; Acoustic Stimulation/methods; Adolescent; Adult; Female; Humans; Male; Photic Stimulation/methods; Reaction Time/physiology; Speech; Speech Production Measurement
17.
Neuroreport; 18(4): 347-50, 2007 Mar 05.
Article in English | MEDLINE | ID: mdl-17435600

ABSTRACT

It is often claimed that binding information across sensory modalities leads to coherent, unitary mental representations. The dramatic illusions experienced as a result of intersensory conflict, such as the McGurk effect, are often attributed to a propensity of the perceptual system to impose multisensory coherence onto events originating from a common source. In contrast with this assumption of unity, we report an unexpected ability to resolve the timing between sound and sight regarding multisensory events that induce an illusory reversal of the elements specified in each modality. This finding reveals that the brain can gain access, simultaneously, to unisensory component information as well as to the result of the integrated multisensory percept, suggesting some degree of penetrability in the processes leading to cross-modality binding.


Subjects
Auditory Perception/physiology; Brain/physiology; Consciousness/physiology; Illusions/physiology; Visual Perception/physiology; Acoustic Stimulation/methods; Humans; Judgment/physiology; Photic Stimulation/methods
18.
Curr Biol; 15(9): 839-43, 2005 May 10.
Article in English | MEDLINE | ID: mdl-15886102

ABSTRACT

One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neurophysiological studies have converged on a preattentive conceptualization of audiovisual speech integration, this mechanism has rarely been measured under conditions of high attentional load, when the observers' attention resources are depleted. We tested the extent to which audiovisual integration was modulated by the amount of available attentional resources by measuring the observers' susceptibility to the classic McGurk illusion in a dual-task paradigm. The proportion of visually influenced responses was severely, and selectively, reduced if participants were concurrently performing an unrelated visual or auditory task. In contrast with the assumption that crossmodal speech integration is automatic, our results suggest that these multisensory binding processes are subject to attentional demands.


Subjects
Attention/physiology; Auditory Perception/physiology; Speech Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Humans; Illusions/physiology; Memory/physiology; Photic Stimulation; Spain; Task Performance and Analysis
19.
Psychon Bull Rev; 12(6): 1024-31, 2005 Dec.
Article in English | MEDLINE | ID: mdl-16615323

ABSTRACT

Several studies have established that humans orient their visual attention reflexively in response to social cues such as the direction of someone else's gaze. However, the consequences of this kind of orienting have been addressed only for the visual system. We investigated whether visual social attention cues can induce shifts in tactile attention by combining a central noninformative eye-gaze cue with tactile targets presented to participants' fingertips. Data from speeded detection, speeded discrimination, and signal detection tasks converged on the same conclusion: Eye-gaze-based orienting facilitates the processing of tactile targets at the gazed-at body location. In addition, we examined the effects of other directional cues, such as conventional arrows, and found that they can be equally effective. This is the first demonstration that social attention cues have consequences that reach beyond their own sensory modality.


Subjects
Attention; Cues; Social Environment; Space Perception/physiology; Touch/physiology; Humans; Signal Detection, Psychological; Visual Perception
20.
Cognition; 92(3): B13-23, 2004 Jul.
Article in English | MEDLINE | ID: mdl-15019556

ABSTRACT

The McGurk effect is usually presented as an example of fast, automatic, multisensory integration. We report a series of experiments designed to directly assess these claims. We used a syllabic version of the speeded classification paradigm, whereby response latencies to the first (target) syllable of spoken word-like stimuli are slowed down when the second (irrelevant) syllable varies from trial to trial. This interference effect is interpreted as a failure of selective attention to filter out the irrelevant syllable. In Experiment 1 we reproduced the syllabic interference effect with bimodal stimuli containing auditory as well as visual lip movement information, thus confirming the generalizability of the phenomenon. In subsequent experiments we were able to produce (Experiment 2) and to eliminate (Experiment 3) syllabic interference by introducing 'illusory' (McGurk) audiovisual stimuli in the irrelevant syllable, suggesting that audiovisual integration occurs prior to attentional selection in this paradigm.


Subjects
Auditory Perception/physiology; Speech Perception/physiology; Visual Perception/physiology; Adult; Female; Humans; Male; Mental Processes; Reaction Time; Task Performance and Analysis